Meta-data, i.e., the number of instances and parameters as well as the configuration budget.
| | | | |
|---|---|---|---|
| Run with best incumbent | examples/spear_qcp_small/example_output/run_3 | Cutoff | 5 |
| # Train instances | 10 | Walltime budget | 50 |
| # Test instances | 10 | Runcount budget | inf |
| # Parameters | 26 | CPU budget | inf |
| Deterministic | False | | |
Comparison of the default and incumbent parameter configurations. Parameters that differ between the default and the incumbent are presented first.
| Parameter | Default | Incumbent |
|---|---|---|
| -------------- Changed parameters: -------------- | ----- | ----- |
| sp-clause-activity-inc | 1 | 1.24374 |
| sp-clause-decay | 1.4 | 1.06506 |
| sp-first-restart | 100 | 309 |
| sp-learned-clause-sort-heur | 0 | 14 |
| sp-learned-clauses-inc | 1.3 | 1.40704 |
| sp-learned-size-factor | 0.4 | 1.19767 |
| sp-orig-clause-sort-heur | 0 | 10 |
| sp-phase-dec-heur | 5 | 3 |
| sp-rand-var-dec-freq | 0.001 | 0.05 |
| sp-resolution | 1 | 0 |
| sp-restart-inc | 1.5 | 1.5815 |
| sp-update-dec-queue | 1 | 0 |
| sp-use-pure-literal-rule | 1 | 0 |
| sp-var-activity-inc | 1 | 0.50737 |
| sp-var-dec-heur | 0 | 8 |
| sp-variable-decay | 1.4 | 1.6578 |
| sp-max-res-lit-inc | 1 | inactive |
| sp-max-res-runs | 4 | inactive |
| sp-rand-var-dec-scaling | 1 | 0.62238 |
| sp-res-cutoff-cls | 8 | inactive |
| sp-res-cutoff-lits | 400 | inactive |
| sp-res-order-heur | 0 | inactive |
| sp-rand-phase-scaling | 1 | 0.501992 |
| -------------- Unchanged parameters: -------------- | ----- | ----- |
| sp-clause-del-heur | 2 | 2 |
| sp-rand-phase-dec-freq | 0.001 | 0.001 |
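The split into changed, unchanged, and inactive parameters above can be computed with a simple dictionary diff. This is not CAVE's actual implementation, just a minimal sketch; it assumes the incumbent configuration simply omits parameters that became inactive (e.g., conditionals deactivated by `sp-resolution = 0`). Parameter names are taken from the table above; values are illustrative.

```python
def diff_configs(default, incumbent):
    """Split parameters into changed and unchanged.

    Parameters missing from the incumbent are reported as 'inactive'
    (assumption: inactive conditionals are simply omitted).
    """
    changed, unchanged = {}, {}
    for name, d_val in default.items():
        i_val = incumbent.get(name, "inactive")
        (changed if i_val != d_val else unchanged)[name] = (d_val, i_val)
    return changed, unchanged

# toy configurations using parameter names from the table above
default = {"sp-var-dec-heur": 0, "sp-clause-del-heur": 2, "sp-max-res-runs": 4}
incumbent = {"sp-var-dec-heur": 8, "sp-clause-del-heur": 2}  # max-res-runs inactive
changed, unchanged = diff_configs(default, incumbent)
```

Here `changed` contains `sp-var-dec-heur` (0 → 8) and `sp-max-res-runs` (4 → inactive), while `unchanged` contains only `sp-clause-del-heur`.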
This section contains different ways of analyzing the final incumbent and the performance of the algorithm's default parameter configuration.
A performance table is the most common way to compare the performance of algorithms on the same set of instances. The entries in the table depend on the cost metric of the configurator run; for scenarios optimizing running time, they include the average runtime, the penalized average runtime (PAR), and the number of timeouts.
| | Default | | Incumbent | |
|---|---|---|---|---|
| | Train | Test | Train | Test |
| PAR10 | 0.027 | 0.017 | 0.017 | 0.012 |
| PAR1 | 0.027 | 0.017 | 0.017 | 0.012 |
| Timeouts | 0/10 | 0/10 | 0/10 | 0/10 |
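The PAR-k scores above penalize timeouts by counting each timed-out run as k times the cutoff (here, cutoff = 5). A minimal sketch of the computation, assuming each timed-out run is recorded at the cutoff value:

```python
def par_k(runtimes, cutoff, k):
    """Penalized average runtime: runs that hit the cutoff
    count as k * cutoff instead of their recorded time."""
    penalized = [t if t < cutoff else k * cutoff for t in runtimes]
    return sum(penalized) / len(penalized)

runs = [0.5, 1.2, 5.0, 0.8]        # the 5.0 run hit the cutoff of 5 seconds
par1 = par_k(runs, cutoff=5, k=1)   # ≈ 1.875 (timeout counted once)
par10 = par_k(runs, cutoff=5, k=10) # ≈ 13.125 (timeout counted as 50)
```

Since no run times out in the table above (0/10 timeouts), PAR10 and PAR1 coincide there.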
Depicts the cost distributions over the set of instances. Since these are empirical distributions, the plots show step functions. They provide insight into how well configurations perform up to a certain cost threshold; for runtime scenarios, this is the fraction of instances from the set solved within a given timeframe. The training data is shown on the left, the test data on the right.
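The step functions come from the empirical CDF of the per-instance costs. A minimal sketch, assuming one cost value per instance:

```python
import numpy as np

def ecdf(costs):
    """Empirical CDF: for each sorted cost x, the fraction of
    instances solved at a cost of at most x."""
    xs = np.sort(np.asarray(costs, dtype=float))
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

costs = [0.02, 0.05, 0.01, 0.03]  # toy per-instance runtimes
xs, ys = ecdf(costs)              # ys climbs in steps of 1/len(costs)
```

Plotting `xs` against `ys` with a post-step style (e.g., matplotlib's `plt.step(xs, ys, where="post")`) reproduces the step functions described above.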
Scatter plots show the costs of the default and optimized parameter configurations on each individual instance. Since aggregated cost values in tables lose this per-instance information, scatter plots provide a more detailed picture: they reveal whether an overall performance improvement is explained by only a few outliers or by improvements across the entire instance set. The training data is plotted on the left, the test data on the right.
The instance features are projected into a two-dimensional space using principal component analysis (PCA), and the footprint of each configuration is plotted, i.e., on which instances the default or the optimized configuration performs well. In contrast to the other analysis methods in this section, these plots allow insights into which of the two configurations performs well on specific types or clusters of instances. Inspired by Smith-Miles.
Figures: footprint_incumbent, footprint_default (algorithm footprints of the optimized and default configurations).
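The 2-D projection of instance features can be sketched with a plain SVD-based PCA; this is not the report's actual implementation, just an illustration with random toy features:

```python
import numpy as np

def pca_2d(X):
    """Project a feature matrix X (n_instances x n_features) onto its
    first two principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T  # shape (n_instances, 2)

rng = np.random.default_rng(0)
features = rng.normal(size=(10, 5))  # toy instance features
proj = pca_2d(features)              # one 2-D point per instance
```

Each instance becomes a point in the plane; coloring the points by which configuration performs better on them yields the footprint plots.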
Analysis of the trajectory and the runhistory returned by the configurator, to gain insights into how the configurator tried to find a well-performing configuration.
Analysis of the configurations iteratively sampled during the optimization procedure. Multi-dimensional scaling (MDS) is used to reduce the dimensionality of the search space and to plot the distribution of evaluated configurations. The larger the dot, the more often that configuration was evaluated on instances from the set. The colours correspond to the predicted performance in that part of the search space.
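The MDS embedding can be sketched with classical MDS, which eigendecomposes the double-centered matrix of squared pairwise distances between configurations. This is a minimal illustration, not the report's actual implementation; the input here is a toy distance matrix between four configurations:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed points in `dim` dimensions from a
    pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    L = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * L                     # shape (n, dim)

# toy pairwise distances between four configurations on a unit square
pts = np.array([[0.0, 0.0], [1, 0], [0, 1], [1, 1]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(D)  # recovers the square up to rotation/reflection
```

In the report's plot, each embedded point would then be sized by its number of evaluations and colored by predicted performance.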